Detecting and localizing edges composed of steps, peaks and roofs
It is well known that the projection of depth or orientation discontinuities in a physical scene results in image intensity edges which are not ideal step edges but are more typically a combination of step, peak, and roof profiles. However, most edge detection schemes ignore the composite nature of these edges, resulting in systematic errors in detection and localization. We address the problem of detecting and localizing these composite edges, while at the same time solving the problem of false responses in smoothly shaded regions with a constant gradient of image brightness. We show that a class of nonlinear filters, known as quadratic filters, is appropriate for this task, while linear filters are not. A series of performance criteria is derived for characterizing the SNR, localization, and multiple responses of these filters, in a manner analogous to Canny's criteria for linear filters. A two-dimensional version of the approach is developed which can represent multiple edges at the same location and determine the orientation of each to any desired precision; this permits junctions to be localized without rounding. Experimental results are presented.
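The quadratic-filter idea can be sketched in one dimension: an even- and odd-symmetric filter pair is applied to the signal and their squared responses are summed, so the local energy peaks on step, peak, and roof profiles alike. The Gabor pair and all parameter values below are illustrative stand-ins, not the paper's optimized filters.

```python
import numpy as np

def gabor_pair(sigma=2.0, freq=0.25, radius=8):
    """Even/odd quadrature filter pair (illustrative parameters)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    even = g * np.cos(2 * np.pi * freq * x)
    odd = g * np.sin(2 * np.pi * freq * x)
    even -= even.mean()        # zero DC: no response to constant intensity
    return even, odd

def local_energy(signal, even, odd):
    """Quadratic (energy) response: sum of squared filter outputs."""
    re = np.convolve(signal, even, mode="same")
    ro = np.convolve(signal, odd, mode="same")
    return re**2 + ro**2

# A step edge superimposed on a smooth shading ramp:
# the energy peaks at the step, not along the ramp.
s = np.concatenate([np.zeros(32), np.ones(32)]) + np.linspace(0.0, 0.5, 64)
even, odd = gabor_pair()
E = local_energy(s, even, odd)
```

A single linear filter cannot peak on both a step (odd profile) and a roof (even profile); the quadratic combination responds to both, which is the composite-edge property the abstract describes.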
Preattentive texture discrimination with early vision mechanisms
We present a model of human preattentive texture perception. This model consists of three stages: (1) convolution of the image with a bank of even-symmetric linear filters, followed by half-wave rectification, to give a set of responses modeling the outputs of V1 simple cells; (2) inhibition, localized in space, within and among the neural-response profiles, which suppresses weak responses when there are strong responses at the same or nearby locations; and (3) texture-boundary detection using wide odd-symmetric mechanisms. Our model can predict the salience of texture boundaries in an arbitrary gray-scale image. A computer implementation of this model has been tested on many of the classic stimuli from the psychophysics literature. Quantitative predictions of the degree of discriminability of different texture pairs match well with experimental measurements of discriminability in human observers.
A network for multiscale image segmentation
Detecting edges of objects in their images is a basic problem in computational vision. The scale-space technique introduced by Witkin [11] provides a means of using local and global reasoning to locate edges. This approach has a major drawback: it is difficult to obtain accurately the locations of the 'semantically meaningful' edges. We have refined the definition of scale-space and introduced a class of algorithms for implementing it based on anisotropic diffusion [9]. The algorithms involve simple, local operations replicated over the image, making parallel hardware implementation feasible. In this paper we present the major ideas behind the use of scale space and anisotropic diffusion for edge detection, show that anisotropic diffusion can enhance edges, suggest a network implementation of anisotropic diffusion, and provide design criteria for obtaining networks that perform scale-space filtering and edge detection. The results of a software implementation are shown.
Scale-space and edge detection using anisotropic diffusion
The scale-space technique introduced by Witkin involves generating coarser resolution images by convolving the original image with a Gaussian kernel. This approach has a major drawback: it is difficult to obtain accurately the locations of the “semantically meaningful” edges at coarse scales. In this paper we suggest a new definition of scale-space, and introduce a class of algorithms that realize it using a diffusion process. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing in preference to interregion smoothing. It is shown that the “no new maxima should be generated at coarse scales” property of conventional scale space is preserved. As the region boundaries in our approach remain sharp, we obtain a high-quality edge detector which successfully exploits global information. Experimental results are shown on a number of images. The algorithm involves elementary, local operations replicated over the image, making parallel hardware implementations feasible.
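The spatially varying diffusion described above can be sketched as a minimal Perona–Malik-style iteration. The exponential conduction function, the parameter values, and the periodic image boundaries below are illustrative choices for brevity, not the paper's exact implementation.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
    """Diffusion whose coefficient g falls off with gradient magnitude,
    so smoothing happens within regions while edges stay sharp.
    (Sketch: illustrative conductance and parameters; np.roll gives
    periodic boundaries for brevity.)"""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conduction coefficient
    for _ in range(n_iter):
        # nearest-neighbour differences in the four directions
        dn = np.roll(u,  1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u,  1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        # weak gradients (noise) diffuse; strong gradients (edges) do not
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy two-region image: diffusion smooths inside each region
# but preserves the step between them.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
out = anisotropic_diffusion(noisy)
```

Note the stability condition: with four neighbours, the step size `lam` must stay at or below 0.25 for the explicit update to remain stable.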
A computational model of texture segmentation
An algorithm for finding texture boundaries in images is developed on the basis of a computational model of human texture perception. The model consists of three stages: (1) the image is convolved with a bank of even-symmetric linear filters followed by half-wave rectification to give a set of responses; (2) inhibition, localized in space, within and among the neural response profiles results in the suppression of weak responses when there are strong responses at the same or nearby locations; and (3) texture boundaries are detected using peaks in the gradients of the inhibited response profiles. The model is precisely specified, equally applicable to grey-scale and binary textures, and is motivated by detailed comparison with psychophysics and physiology. It makes predictions about the degree of discriminability of different texture pairs which match very well with experimental measurements of discriminability in human observers. From a machine-vision point of view, the scheme is a high-quality texture-edge detector which works equally well on images of artificial and natural scenes. The algorithm makes use of simple, local, and parallel operations, which makes it potentially real-time.
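The three stages can be made concrete on a one-dimensional signal. The filter bank, the winner-take-most inhibition rule, and every parameter below are simplified stand-ins for the model's, chosen only to show how the pipeline fits together.

```python
import numpy as np

def half_wave(x):
    # stage 1 nonlinearity: positive and negative parts kept separately
    return np.maximum(x, 0), np.maximum(-x, 0)

def texture_boundary_1d(signal, widths=(2, 4, 8)):
    """Sketch of the three-stage model on a 1-D 'texture' signal."""
    x = np.arange(-16, 17, dtype=float)
    responses = []
    for s in widths:
        # stage 1: even-symmetric filter bank + half-wave rectification
        g = np.exp(-x**2 / (2 * s**2))
        even = g * np.cos(2 * np.pi * x / (4 * s))
        even -= even.mean()
        r = np.convolve(signal, even, mode="same")
        responses.extend(half_wave(r))
    R = np.array(responses)
    # stage 2: local inhibition -- weak responses are suppressed where a
    # stronger response exists at the same location (illustrative rule)
    strongest = R.max(axis=0, keepdims=True)
    R = np.where(R > 0.5 * strongest, R, 0.0)
    # stage 3: boundaries at peaks of the pooled response gradient
    pooled = R.sum(axis=0)
    return np.abs(np.gradient(pooled))

# two textures: fine stripes on the left, coarse stripes on the right
t = np.concatenate([np.sign(np.sin(np.arange(64) * np.pi / 2)),
                    np.sign(np.sin(np.arange(64) * np.pi / 8))])
bnd = texture_boundary_1d(t)
```

The gradient profile `bnd` is large where the filter-bank responses change character, i.e. at the texture boundary, which is the behaviour the model's stage 3 exploits in two dimensions.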